Forging a Future With Ethical AI

As algorithms become more prevalent and more advanced, what ethical issues should companies using AI be aware of? Timon Tanné, Head of Data Science at Echobox, examines bias as a key ethical challenge in AI, today and in the future.

As AI becomes more deeply intertwined with our daily lives, the ethical questions facing companies and individuals have become more complex. Businesses recognize the importance of ethical AI and the reputational damage that can stem from association with a prejudiced algorithm or one that produces unethical outputs, and this recognition is driving change. A decade ago, AI ethics was perhaps an afterthought, considered only in the most apparent cases of harmful output. Today, ethics is increasingly considered early in the AI project lifecycle and incorporated during the requirements-gathering process.

A few key ethical issues have been present since the early days of AI and continue to be important in a business context as technology evolves. The first is bias. 

To fully understand the problem of bias, let’s start at the beginning of the lifecycle of an algorithm: a set of instructions and logical rules that execute to achieve an outcome, essentially the building blocks of AI. One of the first stages of creating an algorithm is gathering the data on which to train the model, and the challenge is making that model robust. In many cases, the quantity of training data takes priority over its quality or representativeness (in terms of both the content itself being representative and it coming from a diverse and representative set of sources). An algorithm may be trained on diverse content from the internet or other public sources, and the quality of web content cannot always be ensured. Within a set of data scraped from the web, certain populations might be over- or under-represented, content may be framed in biased ways, and some of it may even be false. If an algorithm is trained on biased data, its output is likely to be biased, and the impact can be far-reaching.
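To make the representativeness problem concrete, here is a minimal sketch of the kind of audit a data team might run over a scraped corpus before training. It assumes the pipeline records a source label per example; the "source_domain" column and the 5% threshold are illustrative assumptions, not part of any standard.

```python
# A minimal sketch of auditing scraped training data for representation skew.
# Column names and the threshold are illustrative assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str,
                          threshold: float = 0.05) -> pd.DataFrame:
    """Share of each value in `column`, flagging values below `threshold`."""
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["under_represented"] = shares["share"] < threshold
    return shares

data = pd.DataFrame({
    "text": ["article one", "article two", "article three", "article four"],
    "source_domain": ["news.example", "news.example",
                      "news.example", "forum.example"],
})
print(representation_report(data, "source_domain"))
```

A report like this does not remove bias by itself, but it makes the skew visible early, while the dataset can still be rebalanced.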

Another issue in AI ethics that could become more prominent as technology evolves is the malicious use of algorithms. This issue is perhaps more straightforward and, for now, less prevalent than bias, making it a less significant threat in a business context.

It’s always possible for bad actors to train an algorithm with malicious intent, and some experts warn that floods of biased data or misinformation could be deliberately released to manipulate otherwise ethical algorithms. But for most companies using AI, corrupt or unethical output is the result of unexpected algorithmic behavior, not an intentionally malevolent action. Algorithms often function as black boxes, and even experts and data scientists cannot entirely control them.
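One hedged illustration of that warning: if a team keeps a small, trusted reference sample of training data, a cheap statistical comparison can flag an incoming batch whose distribution diverges sharply, one basic defence against a deliberate flood of skewed data. The labels and the 0.1 threshold below are illustrative assumptions.

```python
# A minimal sketch: compare an incoming batch's label distribution against a
# trusted reference sample and hold suspiciously divergent batches for review.
from collections import Counter
import math

def kl_divergence(p: Counter, q: Counter, eps: float = 1e-9) -> float:
    """Smoothed KL(p || q) over the union of observed labels."""
    labels = set(p) | set(q)
    total_p = sum(p.values()) or 1
    total_q = sum(q.values()) or 1
    return sum(
        (p[l] / total_p + eps)
        * math.log((p[l] / total_p + eps) / (q[l] / total_q + eps))
        for l in labels
    )

reference = Counter({"positive": 480, "negative": 520})
incoming = Counter({"positive": 950, "negative": 50})  # suspiciously skewed

if kl_divergence(incoming, reference) > 0.1:  # threshold is an assumption
    print("Incoming batch diverges from reference; hold it for review.")
```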

How can these ethical issues be corrected, and even prevented, as AI technology is increasingly adopted across companies of all sizes and deployed in new ways across the business? With bias posing such a considerable risk for companies using AI at present, we’ll focus on the main approaches to correcting for bias when training and using algorithms, which we return to below.
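As one concrete example of such a correction (a common technique, not necessarily the author's own approach), training examples can be reweighted by the inverse frequency of their group so that an over-represented group does not dominate the loss. The group labels here are hypothetical and would, in practice, come from a dataset audit like the one sketched earlier.

```python
# A minimal sketch of bias correction by inverse-frequency reweighting.
import numpy as np

def inverse_frequency_weights(groups: np.ndarray) -> np.ndarray:
    """Weight each example by 1 / (frequency of its group), mean-normalized."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / counts.sum()))
    weights = np.array([1.0 / freq[g] for g in groups])
    return weights / weights.mean()

groups = np.array(["A", "A", "A", "A", "B"])  # group B is under-represented
print(inverse_frequency_weights(groups))      # B's examples get ~4x the weight
```

Weights computed this way can typically be passed straight to a training routine, for example via the sample_weight argument that most scikit-learn estimators accept.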

At the moment, no machine or algorithm has unequivocally passed the Turing Test, the famous benchmark for determining whether a machine can demonstrate intelligence indistinguishable from a human’s, though some disputed claims of success have been made in recent years. In the next decade, we may well see an intelligent system pass this test, which would mean we could no longer distinguish between communicating with that system and communicating with another human.

GPT-3 may be a key advancement in getting there. One of the largest language models in use and widely considered a breakthrough in AI, it can generate fluent sentences and even write article summaries or full creative stories from a prompt of just a few lines.
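GPT-3 itself is served through OpenAI's paid API, but the prompt-in, story-out workflow it popularized can be sketched with GPT-2, its openly available predecessor, via the Hugging Face transformers library:

```python
# A minimal sketch of prompt-based generation in the GPT family, using GPT-2
# as an openly available stand-in for GPT-3. First run downloads the weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "In a quiet village by the sea, an old clockmaker discovered"
story = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(story[0]["generated_text"])
```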

Certain ethical issues also surface with the advances in AI signaled by the arrival of GPT-3 and other NLP models of the Transformer generation. For example, these models’ output often follows the tone or style of the prompt, which can be problematic: even if the algorithm’s creator tries to remove bias and toxic language, the model can still generate problematic content when fed harmful or malicious prompts.
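One mitigation, sketched minimally below, is to screen generated text with a toxicity classifier before it reaches users. The model unitary/toxic-bert is one publicly available classifier on the Hugging Face Hub, and the 0.5 threshold is an illustrative assumption, not a recommended production setting.

```python
# A minimal sketch of screening generated output with a toxicity classifier.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def safe_to_publish(text: str, threshold: float = 0.5) -> bool:
    """Reject text whose top toxicity label scores above the threshold."""
    result = toxicity(text)[0]  # e.g. {"label": "toxic", "score": 0.98}
    return result["score"] < threshold

candidate = "Here is the model's generated reply..."
if safe_to_publish(candidate):
    print(candidate)
else:
    print("Output withheld: flagged as potentially toxic.")
```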

Even with today’s version of GPT-3, it can be difficult to distinguish AI from human intelligence, and the ethical issues and complexities will only become more significant as algorithms grow more sophisticated and their capabilities approach those of a human.

Minimizing ethical risk in AI and reducing bias are rooted in transparency. We must make our algorithms more transparent, introduce model milestones that make it possible to understand and correct the output at each stage, and study the diversity of biases that occur so that we can eradicate them. Of course, it’s not feasible for any person or team to do this alone. The entire AI community must collaborate to identify and implement standardized frameworks and control systems that do not exist today. We can achieve this through open-sourcing models and training mechanisms, allowing a broader set of people to determine how our models, and their behaviors, might need to change to ensure an ethical future for AI.
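A sketch of what one such milestone check could look like in practice: evaluating a model separately on demographic or content slices, so that an accuracy gap surfaces at review time rather than after deployment. The column and group names here are hypothetical.

```python
# A minimal sketch of sliced evaluation for a model milestone review.
import pandas as pd

def accuracy_by_slice(df: pd.DataFrame, slice_col: str) -> pd.Series:
    """Accuracy per slice; a large spread between slices signals biased behavior."""
    df = df.assign(correct=df["prediction"] == df["label"])
    return df.groupby(slice_col)["correct"].mean()

results = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1],
    "label":      [1, 0, 1, 0, 0, 0],
    "group":      ["A", "A", "A", "B", "B", "B"],
})
print(accuracy_by_slice(results, "group"))  # A scores 1.0, B only 0.33
```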

What are the risks companies using AI should be aware of? Share your thoughts with us on Facebook, Twitter, and LinkedIn. We’d love to know!

Head of Data Science, Echobox
